TIME-OF-FLIGHT CAMERA SYSTEM
Patent abstract:
The invention relates to a TOF camera system comprising several cameras, at least one of the cameras being a TOF camera, in which the cameras are assembled on a common support and take an image of the same scene, and in which at least two cameras are controlled by different control parameters.

Publication number: BE1022488B1
Application number: E2014/0089
Filing date: 2014-02-10
Publication date: 2016-05-04
Inventors: Daniel Van Nieuwenhove; Julien Thollot
Applicant: Softkinetic Sensors NV
Patent description:
Time-of-flight camera system

Field of the invention

The present invention relates to time-of-flight (TOF) range-imaging systems, namely TOF camera systems. In particular, the object of the present invention is to provide a high-quality 3D image of a scene.

Background of the invention

Artificial vision is a growing field of research that includes processes for the acquisition, processing, analysis and understanding of images. Its main guiding principle is to duplicate the capabilities of the human vision system by electronically perceiving and understanding the images of a scene. In particular, one research theme in artificial vision is the perception of depth, or in other words, three-dimensional (3D) vision. For humans, the perception of depth is produced by the so-called stereoscopic effect, by which the human brain fuses two slightly different images of a scene captured by the two eyes and extracts, among other things, depth information. In addition, recent studies have shown that the ability to recognize objects in a scene also contributes greatly to the perception of depth.

For camera systems, depth information is not readily available and requires complex processes and systems. When imaging a scene, a conventional two-dimensional (2D) camera system associates each point of the scene with a given RGB color value; at the end of the imaging process, a 2D color map of the scene is created. A standard 2D camera system cannot easily recognize objects in a scene from this color map, as the color is highly dependent on variable scene lighting and does not intrinsically contain dimensional information. New technologies have been introduced to advance artificial vision and in particular 3D imaging, allowing the direct capture of depth-related information and the indirect acquisition of dimensional information about an object or a scene. Recent advances in 3D imaging systems are impressive and have led to growing interest from industry, academia and consumers.

The most common technologies used to create 3D images are based on the stereoscopic effect. Two cameras take pictures of the same scene, but they are separated by a certain distance, just like human eyes. A computer compares the images while shifting them over one another to find the parts that are in correspondence and those that are not. The amount of shift is called the disparity. The disparity at which the objects in the two images best match is used by the computer to compute distance information, namely a depth map, using the geometric parameters of the camera sensors and the lens specifications.

Another, newer technology is represented by the time-of-flight (TOF) camera system 3 shown in FIG. 1. The TOF camera system 3 includes a camera 1 with a dedicated illumination unit 18 and data processing means 4. TOF camera systems are capable of capturing 3D images of a scene 15 by analyzing the time of flight of light from the light source 18 to an object and back. Such 3D camera systems are now used in many applications where the measurement of depth or distance information is required. Standard camera systems, such as Red-Green-Blue (RGB) camera systems, are passive technologies, that is, they use ambient light to capture images and do not rely on the emission of additional light.
In contrast, the basic operational principle of a TOF camera system is to actively illuminate the scene with modulated light 16 at a predetermined wavelength using the dedicated illumination unit, for example with light pulses at at least one predetermined frequency. The modulated light is reflected back from objects within the scene. A lens collects the reflected light 17 and forms an image of the objects on an imaging sensor 1. Depending on the distance of the objects from the camera, a delay is experienced between the emission of the modulated light, for example the so-called light pulses, and the reception of these light pulses at the camera. In a common embodiment, the distance between a reflecting object and the camera can be determined as a function of the observed time delay and the constant value of the speed of light. In another, more complex and more reliable embodiment, a plurality of phase differences between the emitted reference light pulses and the captured light pulses can be determined and used to evaluate the depth information, as introduced by Robert Lange in his PhD thesis entitled "3D Time-of-Flight Distance Measurement with Custom Solid-State Image Sensors in CMOS/CCD Technology".

A TOF camera system comprises several elements, each of which has a distinct function. 1) A first component of a TOF camera system is the illumination unit 18. When using pulses, the pulse width of each light pulse determines the operating range of the camera. For example, for a pulse width of 50 ns, the range is limited to 7.5 m. As a result, scene illumination becomes critical for the operation of a TOF camera system, and the high-speed driving frequency requirements for illumination units require the use of specialized light sources such as light-emitting diodes (LEDs) or lasers to produce such short light pulses. 2) Another component of a TOF camera system is the imaging sensor 1, or TOF sensor. The imaging sensor conventionally comprises a matrix array of pixels forming an image of the scene. By pixel is understood the picture element sensitive to electromagnetic light radiation together with its associated electronic circuits. The pixel output can be used to determine the time of flight of light from the illumination unit to an object in the scene and back from the object to the TOF imaging sensor. The time of flight can be calculated in a separate processing unit connected to the TOF sensor or can be integrated directly into the TOF sensor itself. Various methods are known for measuring the timing of light as it travels from the illumination unit to the object and from the object back to the imaging sensor. 3) Imaging optics 2 and processing electronics 4 are also provided within a TOF camera system. The imaging optics are designed to collect the light reflected from objects in the scene, usually in the infrared range, and to filter out light that is not at the same wavelength as the light emitted by the illumination unit. In some embodiments, the optics may allow the capture of infrared illumination for TOF measurements and of visible illumination for RGB color measurements. The processing electronics drives the TOF sensor so that, among several features, light at frequencies different from those emitted by the illumination unit but of a similar wavelength (typically sunlight) is filtered out. By filtering out unwanted wavelengths or frequencies, the background light can be effectively suppressed.
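For illustration only (not part of the patent text), the two distance relations just described can be written out directly. The following Python sketch uses an assumed 20 MHz modulation frequency as an example value:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def distance_from_delay(delay_s: float) -> float:
    # Pulsed TOF: the light travels to the object and back, so the
    # distance is half the round-trip path.
    return C * delay_s / 2.0

def distance_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    # Continuous-wave TOF: a phase shift of 2*pi corresponds to one full
    # modulation period of round trip, i.e. an unambiguous range of
    # c / (2 * f_mod).
    unambiguous_range = C / (2.0 * f_mod_hz)
    return unambiguous_range * (phase_rad / (2.0 * math.pi))

# A 50 ns round trip corresponds to 7.5 m, matching the pulse-width
# example above.
print(distance_from_delay(50e-9))          # ~7.5 m
# With an assumed 20 MHz modulation frequency, the unambiguous range is
# also ~7.5 m; a measured phase of pi then maps to ~3.75 m.
print(distance_from_phase(math.pi, 20e6))  # ~3.75 m
```

The quantity c / (2·f_mod) appearing here is the unambiguous range of a continuous-wave measurement; it is at the origin of the depth-aliasing effect discussed later in this document.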
The processing electronics further includes control circuitry for both the illumination unit and the imaging sensor, so that these components can be precisely and synchronously controlled to ensure that accurate image capture is performed and a reliable depth map of the scene is determined.

The choice of the elements constituting a TOF camera system is crucial. TOF camera systems tend to cover wide ranges, from a few millimeters up to several kilometers, depending on the type and performance of the elements used. Such TOF camera systems may have a distance accuracy ranging from less than one centimeter to several centimeters or even meters. Technologies that can be used with TOF camera systems include pulsed light sources with digital time counters, radio-frequency (RF) modulated light sources with phase detectors, and range-gated imagers.

TOF camera systems suffer from several disadvantages. In current TOF imagers or TOF sensors, pixel pitches usually range from 10 μm to 100 μm. Due to the novelty of the technology and the fact that the architecture of a TOF pixel is highly complex, it is difficult to design a small pixel while maintaining an effective signal-to-noise ratio (SNR) and keeping in mind the requirement of mass production at low cost. This results in relatively large chip sizes for the TOF image sensor. With conventional optics, such large image sensor sizes require the fitting of large and thick optical stacks on the chip. In general, a compromise must be found between the required definition and the thickness of the device to make it integrable into portable consumer products.

In addition, the depth measurement obtained by a TOF camera system may be erroneously determined for several reasons. First, the definition of such systems needs to be improved: a large pixel size requires a large sensor chip, so the sensor definition is limited by the size of the TOF sensor. Second, the accuracy of the depth measurement of such systems still needs to be improved, among a plurality of parameters, as it strongly depends on the signal-to-noise ratio and on the modulation frequency (the modulation frequency determining both the depth accuracy and the operational depth range). In particular, the uncertainty or inaccuracy of the depth measurement may be due to an effect called "depth aliasing", which will be described in detail later. The uncertainty may also come from the presence of additional background light. Indeed, the pixels of TOF camera systems include a photosensitive element that receives incident light and transforms it into an electrical signal, for example a current signal. During capture of a scene, if the background light is too intense at the wavelength to which the sensor is sensitive, the pixels may also receive additional light that has not been reflected by objects inside the scene, which can alter the measured distance.

In the field of TOF imaging, several options are available to at least partially overcome the major individual disadvantages from which the technology may suffer, such as, for example, improved modulation-frequency schemes allowing more robust and precise depth measurement, de-aliasing mechanisms, or background-light robustness mechanisms. A solution remains to be proposed to address these disadvantages together and to further improve the definition of TOF camera systems, while limiting the thickness of the complete system so as to make it compatible with integration into mass-produced portable devices.
Summary of the invention

The present invention relates to a TOF camera system comprising several cameras, at least one of the cameras being a TOF camera, in which the cameras are assembled on a common support and take an image of the same scene, and in which at least two cameras are controlled by different control parameters. The use of the depth information from the at least one TOF camera, combined with the information from at least one other camera controlled with different parameters, with all the captured information merged together, helps to refine and improve the quality of the resulting image, and in particular helps to obtain a higher-quality depth map of the captured scene.

Advantageously, the sensors of the cameras are assembled on a common support, which reduces the thickness of the TOF camera system. More advantageously, the sensors of the cameras are manufactured on the same substrate, improving the thickness and size of the TOF camera system. Preferably, the TOF camera system further comprises an array of several lenses, each lens of the array being associated with one of the cameras. These lenses help focus the incident light onto the photosensitive area of their respective associated camera sensor.

Advantageously, the control parameters comprise parameters for implementing a stereoscopic technique and/or for implementing a de-aliasing algorithm and/or for implementing a background-light robustness mechanism, as explained below in this document. More advantageously, at least two cameras of the TOF camera system can take an image of the same scene during different integration times. More advantageously, the TOF camera system may comprise two TOF cameras, each having a TOF sensor taking an image of the same scene and each being driven to determine distance information from different modulation frequencies. More preferably, the TOF camera system may further include means for filtering light in the visible range and/or in the infrared range. The use of such light-filtering means allows the wavelength range to which each sensor must be sensitive to be chosen.

The present invention will be better understood on reading the description which follows, in the light of the accompanying drawings.

Description of drawings

Fig. 1 shows the basic operational principle of a TOF camera system;
FIG. 2 represents a TOF stack with multiple lenses;
FIG. 3 represents a standard TOF sensor used in a stack as represented in FIG. 2;
FIG. 4 represents a TOF sensor optimized for a stack as represented in FIG. 2;
FIG. 5 represents a stack, as represented in FIG. 2, using four distinct TOF sensors;
FIG. 6 shows a multi-lens TOF stack, also using color and infrared filters.

Description of the invention

The present invention will be described with respect to particular embodiments and with reference to certain drawings, but the invention is not limited thereto. The drawings are only schematic and non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale, for illustrative purposes.

As shown in FIG. 1, a conventional TOF camera system comprises a TOF sensor 1 and its associated optical means 2 (for example a lens), an illumination unit 18 for illuminating the scene 15 according to the TOF principle specifications, and electronic circuits 4 for driving at least the illumination unit and the TOF sensor.
The light is usually in the infrared wavelength range and comprises periodically modulated pulses emitted towards the scene. The TOF sensor and its associated optical means are designed to capture the emitted modulated light that is reflected back from the scene. One option for determining the distance information between the scene objects and the TOF camera system is to determine the phase lag between the pulsed or modulated light emitted and the light received back at the TOF sensor.

In order to improve the quality and definition of a time-of-flight image, namely the depth map, and to reduce the thickness of the TOF camera system, the present invention relates to a new TOF camera system comprising several cameras, at least one of the cameras being a TOF camera, in which the cameras are assembled on a common support and take an image of the same scene, and in which at least two cameras are controlled by different control parameters. By camera is meant an electronic device comprising at least the means for capturing the electromagnetic radiation of incident light. For example, a camera may be represented by at least one pixel of a sensor device. A camera may also be represented by a group of pixels on a sensor device, or by an entire sensor device. Preferably, the sensor device on which the camera is defined comprises a matrix array of pixels and the circuits for operating them. The circuits may further include electronic means for further processing the data measured by each pixel and/or each camera of the at least one sensor device used. The invention may also relate, more generally, to a TOF camera system comprising a plurality of independent cameras, each having at least one sensor device, among which at least one is a TOF sensor device.

We will now explain the invention with regard to a symmetrical configuration of a group of four cameras. It is worth noting at this point that aspects of the present invention are not limited to four cameras each associated with at least one lens, nor to the symmetry shown in the examples used. A person skilled in the art could easily extrapolate the described principles to fewer, or more, lenses and cameras, for example two lenses associated with at least one sensor on which two cameras are defined, and/or differently configured viewpoints.

In designing a TOF camera system comprising a plurality of cameras, at least one of the cameras being a TOF camera, several configurations are possible for arranging the camera devices. In Figure 2, a first configuration is shown with four lenses A, B, C, D (101-104) on top of a support, an image sensor plane 100. Each lens allows the incident light reflected from the scene to be focused on one individual camera of the image sensor plane; for example, in a particular embodiment, each lens concentrates the captured light on one camera defined on a TOF image sensor. Merging the four individual images can offer an image of larger definition, with a smaller thickness, than a larger TOF sensor system with a single high-definition camera. In FIGS. 3 to 5, a support is shown, i.e. an image sensor plane 100, four cameras 107 and their associated circuits 110. Several possible configurations of the image sensor circuits within the support are shown. 1) The first configuration, shown in Figure 3, is the simplest.
Only one TOF image sensor device is used; it covers the four image areas 107 (i.e., the cameras) formed or delimited by the four lenses 101 to 104. The image sensor circuits 110, including various digital and/or analog blocks (signal conditioning, analog-to-digital conversion, filtering, image sensor processing, ...), are in this case placed at the side of the image sensor, and all the TOF pixels are grouped together. An advantage of this approach is that existing TOF image sensor devices can be used for this principle. A disadvantage of this approach is that most of the TOF pixels between the regions 107 are not in the image plane of the optics 101 to 104 and are thus useless. Another disadvantage of this approach is that such a system will suffer from limited definition, since an efficient TOF sensor device has a native definition for a given size. Another disadvantage of this approach is that it only provides information based on the TOF principle from the scene, i.e. a depth map and a confidence half-tone (illumination) map. 2) A second possible configuration is shown in Figure 4, where several cameras are assembled on a common support (for example designed on the same silicon substrate). In this configuration, each camera is also covered by its own lens. Only the cameras located in the regions delimited by the optics produce the images. In this way, the image sensor circuits can be allocated in the free space between the regions 107. In FIG. 4, the free space between the regions 107 can be seen as rectangular strips forming a "cross", in which the electronic circuits for operating the cameras can be fitted so as to save silicon and minimize the size of the sensor system thus formed. As shown in Fig. 4, the resulting image sensor system is smaller in size than the image sensor system of Fig. 3. This second configuration optimizes chip area and cost. It should be noted that, of course, the electronic circuits filling the free substrate space available between the cameras may be designed in other, less optimal shapes than a cross, for example in the form of a strip. 3) A third possible configuration is shown in FIG. 5, where four cameras (formed by four individual TOF image sensors) are positioned under the four lenses 101-104, instead of a single sensor device. In this configuration, each TOF sensor is covered by its own lens and is driven by its own circuits. With this approach, four individual camera calibrations and four mounting alignment steps are required.

According to a first embodiment of the present invention, the TOF camera system may comprise several cameras, at least one of the cameras being a TOF camera, in which the cameras are assembled on a common support and take an image of the same scene, and in which at least two cameras are controlled by different driving parameters. The TOF camera system may be designed according to the previously discussed configurations. Preferably, the TOF camera system may be designed according to the configuration shown in Figure 4, in which the cameras are assembled on a common support. Preferably, the cameras are assembled on a common substrate. This substrate may be silicon-based, but the present invention is not limited thereto. The fact that the cameras are assembled on a common support and take an image of the same scene, and that at least two cameras are controlled by different control parameters, makes it possible in particular to obtain different types of information from the same scene.
Such information may be, for example, at least one of color information, illumination information or a depth map. Preferably, this information may be several depth maps of a given definition and, optionally, a color image, preferably of a higher definition. The fusion of the different pieces of information contained in each single image, namely the fusion of at least one depth map obtained according to the TOF principle with at least one other image containing at least depth information or color information, allows the computation of a single resulting image with improved quality. By "fusion" is understood the combination of the information related to the individual images, so as to produce a resulting enhanced and/or refined image, or "super-image", demonstrating for each single pixel at least a higher-quality depth measurement or a higher definition. Using this TOF camera system, it is possible to merge individual images into a "super-image", for example to merge four individual images. In a preferred embodiment, both the depth definition and the depth map information of the so-called "super-image" resulting from the merging are improved in comparison with the individual information produced from each of the single images.

In a particular embodiment, at least one of the lenses of the lens array, or at least one of the cameras of the TOF system, may differ from the others, in that the lens may deliver an image with a different focal length, and the cameras may be of a different size and/or a different definition. For example, a TOF camera system comprising two TOF cameras and two color cameras may have color cameras (respectively color sensors) of a size and definition different from those of the TOF cameras (respectively TOF sensors). The lens associated with a TOF camera may also have a focal length different from that associated with the color cameras. The scene observed by the TOF cameras and the color cameras being the same, the parameters associated with each kind of camera, namely the definition, the focal length of the lens and the sensor size, can lead to different images being captured by each kind of camera. For example, a depth map evaluated by the stereovision principle from the color images may represent a slightly different view of the scene than the depth map obtained by at least one TOF camera. The driving parameters that can be implemented in the TOF camera system are presented below in this document, but are not limited thereto.

In a particular embodiment, at least two of the cameras can be driven by parameters implementing a stereoscopic technique. Stereoscopy refers to a technique for creating or enhancing the illusion of depth in an image by means of binocular vision. In this technique, the binocular vision of a scene creates two slightly different images of the scene in the two eyes, due to the different positions of the eyes on the head. These differences provide information that the brain can use to calculate depth in the visual scene, providing depth perception. In a particular embodiment, a passive stereo calculation may be used following the time-of-flight depth calculation, based on combinations of at least two viewpoints of the present invention. This calculation can be very rough, in order to identify or resolve depth aliasing. Preferably, the most widely spaced regions 107, i.e. the most widely spaced cameras, may be used. In addition, preferably in the case of four cameras, the diagonal regions can be used to implement these driving parameters.
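For illustration only (not part of the patent text), the stereoscopic relation underlying such a passive stereo calculation is the classical pinhole model; the focal length (800 px) and the 4 cm baseline below are assumed example values:

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    # Pinhole stereo model: Z = f * B / d, with the focal length f in
    # pixels, the baseline B in meters and the disparity d in pixels.
    if disparity_px <= 0:
        raise ValueError("point not matched, or at infinity")
    return f_px * baseline_m / disparity_px

# Assumed example: f = 800 px, baseline = 4 cm (e.g. the spacing between
# two diagonal regions 107), disparity = 10 px -> Z = 3.2 m.
print(depth_from_disparity(800.0, 0.04, 10.0))
```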
In a derived embodiment, at least two color cameras of the same definition can be used to provide an input for a depth measurement based on the stereoscopic principle, with which the depth map produced by at least one TOF camera can be merged. In another embodiment of the present invention using a stereoscopic technique, at least two TOF cameras are each driven with different parameters to provide two depth maps of the same scene with different intrinsic measurement quality. These depth maps are merged together to provide a depth map of higher quality than either of the two original individual depth maps. The TOF camera system may further use the two individual IR illumination maps provided natively by the two TOF cameras to implement a stereoscopic technique producing a depth map from stereoscopic information, which can be used to merge with and refine at least one of the two depth maps from the TOF cameras, or the depth map produced by their fusion. Such an embodiment may be particularly suitable for obtaining, for example, an additional distance measuring range that the predetermined light pulse frequencies or illumination power cannot achieve.

In a particular embodiment in which at least one of the sensors is a TOF sensor operated according to the TOF principle, at least two other sensors may be RGB sensors operated with different parameters, having a higher definition and being used to determine a depth map according to the stereovision principle. This stereovision-based high-definition depth map can be merged with the lower-definition depth map obtained according to the TOF principle on the at least one TOF sensor. The stereovision-based depth map having holes and a poorer depth evaluation compared with a depth measurement based on the TOF principle, the depth map obtained by the TOF camera can be used to refine the incomplete but higher-definition depth map obtained by the stereovision principle. Preferably, the fusion may be implemented within the circuits of the TOF camera system, and the resulting enhanced depth map may also include color information produced from the stereovision capture. This resulting enhanced image is of a definition at least similar to that of the high-definition sensor, but can also be of a lower or higher definition using state-of-the-art interpolation means.

According to another embodiment, another control parameter that can be implemented on the cameras of the TOF camera system, and in particular on its TOF cameras, is the use of different frequencies applied to the emitted pulsed illumination and to its synchronized capture, reflected back from the scene, on each individual TOF camera. This particular embodiment, driving the cameras differently, is intended to apply the principle of depth-measurement de-aliasing to the TOF measurements. In signal processing and related disciplines, aliasing refers to an effect that causes different signals to become indistinguishable when sampled. Temporal aliasing is when samples become indistinguishable over time; it may occur when the periodically sampled signal also has periodic content. In systems implemented according to the TOF principle, at a given modulation frequency, depth aliasing results in ambiguity regarding the distance to be recorded, as the same distance can be measured for objects that are at different distances from a TOF camera system having a predetermined operating range.
For example, a TOF camera system implemented with a single modulation frequency and having an operating range of one meter to five meters will measure any object lying six meters from the camera system as being at one meter (periodic behavior), provided there is sufficient back-reflection of the modulated light onto the camera. In a particular embodiment, at least one of the TOF cameras of the TOF camera system can be controlled according to such a de-aliasing principle, and more particularly by the associated de-aliasing algorithm or method. This at least one TOF camera can be implemented and controlled to measure distance information according to the TOF principle using at least two different frequencies, and the distance measurement obtained by this TOF camera can be de-aliased according to the de-aliasing principle. The distance measurements, in the form of a depth map, can then be merged with measured information from the other cameras of the TOF camera system, said other cameras being driven with different parameters. For example, the other information may be at least one of a higher- or lower-definition depth map produced according to the stereovision principle or the TOF principle, and/or a color image. In a further preferred embodiment, different de-aliasing techniques may be implemented for the different cameras, i.e. the regions 107, giving even more robust de-aliasing, as each camera gives different de-aliased depth measurements.

Another example is a TOF camera system comprising at least two TOF cameras implemented with different parameters, said different parameters being the modulation frequency with which their respective captures are synchronized. The modulated illumination light may comprise at least two predetermined frequencies, a reference frequency and an additional frequency, the latter being, for example, three times lower than the reference frequency. A first TOF camera of the TOF camera system can be controlled in synchronization with the modulation frequency three times lower, while the other TOF camera of the TOF camera system can be controlled in synchronization with the reference frequency. In this way, the two TOF cameras of the TOF camera system can simultaneously acquire aliased depth measurements with different unambiguous distance ranges, and these depth measurements can furthermore be combined to provide a single de-aliased depth map. This principle can be repeated if necessary, thus giving a very large unambiguous distance range to the complete TOF camera system.

In a derived embodiment comprising at least one TOF camera implemented according to the TOF principle, the de-aliased depth map thus produced can be further merged with other measurements from at least one other camera, said other measurement being at least one of: another depth map of the same definition produced according to the TOF principle or the stereovision principle, a color map of the same definition, a depth map of higher definition produced according to the TOF principle or the stereovision principle, or a color map of higher definition. It will be appreciated that, when using a plurality of frequencies, i.e. at least two, to implement the de-aliasing principle on TOF-based depth measurements, the higher the second frequency, the higher the accuracy of this second depth measurement. In this regard, a TOF camera system comprising at least one TOF camera may be implemented according to the de-aliasing principle, and preferably two TOF cameras may each be implemented with at least one distinct frequency; merging the depth measurements can then lead to a more accurate depth map.
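For illustration only (not part of the patent text), a minimal sketch of such two-frequency de-aliasing follows; the frequencies (60 MHz reference, 20 MHz auxiliary, i.e. three times lower) and the combination rule (choosing the wrap count of the precise high-frequency measurement that best agrees with the coarse low-frequency one) are assumed examples, not the patent's prescribed method:

```python
C = 299_792_458.0  # speed of light (m/s)

def unambiguous_range(f_mod_hz: float) -> float:
    return C / (2.0 * f_mod_hz)

def dealias(d_high: float, d_low: float, f_high_hz: float, f_low_hz: float) -> float:
    # d_high: precise but heavily wrapped distance measured at the
    # reference (higher) frequency; d_low: coarser distance measured at
    # the lower frequency (here f_low = f_high / 3, so its unambiguous
    # range covers three wraps of the high-frequency measurement).
    r_high = unambiguous_range(f_high_hz)
    n_wraps = round((d_low - d_high) / r_high)  # integer number of wraps
    return d_high + n_wraps * r_high

# Assumed example values: 60 MHz reference, 20 MHz auxiliary frequency.
f_high, f_low = 60e6, 20e6
true_d = 6.0                                   # meters
d_high = true_d % unambiguous_range(f_high)    # ~1.0 m, aliased but precise
d_low = true_d % unambiguous_range(f_low)      # ~6.0 m, coarse
print(dealias(d_high, d_low, f_high, f_low))   # ~6.0 m, de-aliased
```

In practice the low-frequency measurement is noisier, but it only needs to be accurate enough to select the correct wrap count; the fine value is then taken from the high-frequency measurement.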
If, in addition, at least one of the cameras driven with another parameter is of higher definition, the resulting image will combine a higher definition, a higher accuracy and de-aliased depth measurements. Even more preferably, the camera system may further include means for capturing color information, characterized in that at least one of the cameras captures color information. Even more preferably, at least one of the cameras of the TOF camera system is an RGBZ camera, i.e. it comprises an RGBZ sensor.

In a further embodiment, different background-light robustness mechanisms can be implemented on the cameras. Quite often, improving the robustness to background light increases the noise or the pixel pitch. The use of background-light robustness mechanisms on different regions 107, i.e. on different cameras, can therefore confer strong advantages. In a particular embodiment, at least one of the cameras of the system can be controlled with a background-light robustness mechanism. This may be advantageous for applications where only the definition of a single region 107 is needed in the case of high background light.

In a further embodiment, at least two cameras of the TOF camera system can be controlled with two different integration times. Indeed, a very short integration time gives a high robustness to motion, but also high standard deviations on the depth values, referred to in this document as depth noise. Therefore, one region 107 can be optimized for a short integration time while another region 107 can be optimized for noise performance. By merging the images, and more particularly their associated information, the advantages of both configurations can be obtained and used. Advantageously, this embodiment makes it possible for each merged pixel to obtain reliable information concerning rapidly moving objects, thanks to the TOF camera controlled with a short integration time, while inheriting low-noise information from the other cameras driven with longer integration times. In a derived embodiment, the other cameras may include at least one other TOF camera controlled with a longer integration time. In another embodiment, the other cameras may comprise at least one other TOF camera controlled with a longer integration time and at least one color camera.

In order to proceed with a reliable merge of the different pieces of information, a process must be implemented, in the circuits, in a companion chip, or on a separate processing unit, so as to transform the different sets of information, each associated with its own coordinate system, into a single set of data having a single common predetermined coordinate system. Preferably, the common predetermined coordinate system will be the x-y plane (for example the plane defined by the horizontal and vertical axes) of one of the cameras, for example the x-y plane of the HD camera. Data from the other cameras, for example color images, depth-map measurements or the half-tone image of a TOF confidence map, are projected by means of image registration into an image associated with the common predetermined coordinate system. In particular, image registration here involves the spatial registration of a target image, for example a highly accurate low-definition depth map obtained from a TOF measurement, to align it with a reference image, for example a high-definition depth map of lower depth accuracy obtained by stereovision and including color information.
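For illustration only (not part of the patent text), such a projection into a common coordinate system can be sketched with a pinhole model; the helper below, the intrinsic matrices and the camera-to-camera transform (R, t) are assumed to be known from calibration, and the numeric values are toy examples:

```python
import numpy as np

def reproject_depth(depth, K_tof, K_ref, R, t):
    # Back-project every TOF depth pixel to a 3D point, transform it into
    # the reference camera frame with (R, t), and re-project it with the
    # reference intrinsics K_ref. Returns pixel coordinates and depths.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(K_tof) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    pts = rays * depth.reshape(1, -1)        # 3D points in the TOF frame
    pts_ref = R @ pts + t.reshape(3, 1)      # into the reference frame
    proj = K_ref @ pts_ref
    return proj[:2] / proj[2], pts_ref[2]    # pixel positions, depths

# Assumed toy values: identical intrinsics, 4 cm horizontal baseline,
# flat scene at 3 m.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 3.0)
uv, z = reproject_depth(depth, K, K, np.eye(3), np.array([0.04, 0.0, 0.0]))
```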
Several image registration methods can be used, such as intensity-based or feature-based methods. Intensity-based methods compare intensity patterns in the images via correlation metrics, whereas feature-based methods primarily attempt to find correspondences between image features such as points, lines, contours and depth. Intensity-based methods register entire images or sub-images; if sub-images are registered, the centers of the corresponding sub-images are treated as corresponding feature points. Feature-based methods establish correspondences between a predetermined number of distinctive points in the images. Knowing the correspondence between a number of points in the images, a transformation is then determined to map the target image onto the reference images, thereby establishing a point-to-point correspondence between the reference and target images. This registration process may further include an interpolation technique, as the images may have different definitions.

In a particular preferred embodiment of the invention using image registration, when multiple TOF cameras are used, or at least when the TOF camera system comprises at least one camera providing depth information, the depth information can be used to facilitate the merging of the images. Depth is a unique feature of a scene, to first order independent of the viewing angle and/or lighting conditions. It is therefore a very stable metric for performing any alignment, pattern recognition, or any other means necessary for merging the images.

In a particular preferred embodiment, at least one of the cameras may be further calibrated, allowing the other cameras to inherit this calibration. In time-of-flight imaging, careful calibration steps are required, such as absolute distance calibration and compensation for temperature, deformations, multipath effects, and more. Calibrating only one camera saves time, because fewer pixels and more computational effort can be applied to calculate the calibration; the other cameras can then take advantage of and inherit the calibrated viewpoint to correct distance errors and/or non-linearities. This calibration may be performed at production time, but it may also be performed at run time, for example in a previously mentioned TOF camera system comprising four TOF cameras, by dimensioning at least one of the four viewpoints/cameras to be a much more stable imager, so that it is used as a reference for calibration.

According to a further embodiment of the invention, the TOF camera system may further comprise means for filtering light in the visible range and/or in the infrared range. Color filters can be implemented on top of the cameras, as shown in Figure 6. In this figure, the areas R, G, B and IR respectively stand for band-pass filters for red, green, blue and infrared. This makes it possible to combine both RGB and depth data in a single image, allowing a merged or enhanced image combining all of these properties. Furthermore, a TOF camera system comprising at least one TOF camera and at least one other camera controlled with a different parameter can be characterized in that at least one of the cameras is an RGBZ camera. An RGBZ camera is a multi-pixel camera, characterized in that the detection zones of said pixels collect at least one of red, green and blue, preferably the three RGB colors, and further capture infrared illumination from which depth information (Z) can be processed according to, for example, the TOF principle.
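For illustration only (not part of the patent text), once two depth maps have been registered to the common coordinate system as above, one simple fusion rule is inverse-variance weighting; the noise figures below are assumed example values:

```python
import numpy as np

def fuse_depth(d_a, sigma_a, d_b, sigma_b):
    # Inverse-variance weighted average: the less noisy measurement of
    # each pixel dominates the fused value. NaNs (holes, e.g. unmatched
    # stereo pixels) are replaced by the other map's measurement.
    w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    fused = (w_a * d_a + w_b * d_b) / (w_a + w_b)
    fused = np.where(np.isnan(d_a), d_b, fused)
    fused = np.where(np.isnan(d_b), d_a, fused)
    return fused

# Assumed example: a short-integration map (robust to motion, noisy)
# fused with a long-integration map (low noise, with one hole).
d_short = np.array([[3.02, 2.98], [np.nan, 3.05]])
d_long = np.array([[3.00, 3.00], [3.00, np.nan]])
print(fuse_depth(d_short, 0.05, d_long, 0.01))
```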
In yet another embodiment, the pixels of at least one camera of the TOF camera system may further comprise nanocrystal films. Nanocrystals are nanoparticles of semiconducting materials with diameters in the range of 2 to 10 nm. Nanocrystals have unique optical and electrical properties because of their small size; that is, their properties differ in nature from those of the corresponding bulk material. The most obvious property is the emission of photons under excitation (fluorescence), which can be visible to the human eye as light, or invisible, emitting in the infrared range. The wavelength of the emitted photons depends not only on the material of which the nanocrystal is made, but also on the size of the nanocrystal. The ability to precisely control the size of a nanocrystal allows the manufacturer to determine the wavelength of the emission, i.e. to determine the wavelength of the light output. Nanocrystals can therefore be "tuned" during production to emit any desired wavelength. The ability to control, or "tune", the emission of a nanocrystal by changing its core size is called the "size quantization effect". The smaller the crystal, the closer the emission is to the blue end of the spectrum; the larger the crystal, the closer to the red end. Nanocrystals can even be tuned beyond visible light, into the infrared or the ultraviolet, using specific materials. Used as color filters, nanocrystal films may be designed to re-emit at a wavelength in the range in which the sensor is most sensitive. Preferably, the emission wavelength of the nanocrystal films may be close to the maximum sensitivity of the sensor, allowing a lower-noise measurement.
Claims:
Claims (14)

[1] A TOF camera system (3) comprising a plurality of cameras (1, 107), at least one of the cameras being a TOF camera, in which the cameras are assembled on a common support (100) and take an image of the same scene (15), and in which at least two cameras are controlled by different control parameters.

[2] The TOF camera system (3) according to claim 1, further comprising a lens array (101 to 104), each lens of the array being associated with one of the cameras.

[3] The TOF camera system (3) according to claim 1 or 2, wherein the driving parameters comprise parameters for implementing a stereoscopic technique.

[4] The TOF camera system (3) according to any one of claims 1 to 3, wherein the control parameters comprise parameters for implementing a de-aliasing algorithm.

[5] The TOF camera system (3) according to any one of claims 1 to 4, wherein the driving parameters comprise parameters for implementing a background-light robustness mechanism.

[6] The TOF camera system (3) according to any one of claims 1 to 5, wherein at least two cameras take an image of the same scene during different integration times.

[7] The TOF camera system (3) according to any one of claims 1 to 6, further comprising image registration means for registering together the images provided by the cameras.

[8] The TOF camera system (3) according to any one of claims 1 to 7, further comprising means for calibrating at least one camera at run time.

[9] The TOF camera system (3) according to claim 8, wherein at least one of the calibrated cameras is used as a reference for calibrating the other cameras.

[10] The TOF camera system (3) according to any one of claims 1 to 9, wherein the cameras imaging the scene provide at least one of color information, illumination information or depth information.

[11] The TOF camera system (3) according to any one of claims 1 to 10, further comprising means for merging or combining together the information provided by the cameras into an enhanced image, said enhanced image being characterized by comprising at least one of a higher resolution or a higher depth-measurement accuracy.

[12] The TOF camera system (3) according to any one of claims 1 to 11, further comprising means for filtering light in the visible range and/or in the infrared range.

[13] The TOF camera system (3) according to any one of claims 1 to 12, wherein the cameras are assembled on the same substrate.

[14] The TOF camera system (3) according to claim 13, wherein the substrate is silicon-based.